Fast Reinforcement Learning in Continuous Action Spaces

Author

  • Daniel N. Nikovski
Abstract

A new fast algorithm for reinforcement learning (RL) in continuous state and action spaces is proposed. Unlike algorithms based on dynamic programming, the proposed algorithm uses neither temporal nor spatial differencing. Instead, it couples the solution of the Hamilton-Jacobi-Bellman (HJB) partial differential equation with the structure of the function approximators that are commonly used together with RL algorithms to provide generalization over state space. The proposed technique bears close resemblance to the finite element method (FEM) widely used in physics and engineering. Pontryagin's optimality principle is applied without introducing any penalty functions, and locally-weighted function approximation is employed for generalization. As a result, the HJB equation is reduced to a set of linear algebraic equations which can be solved iteratively to yield the optimal value function of the problem in very few iterations.
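The reduction the abstract describes can be pictured concretely: once the greedy action at each mesh node is frozen, the Bellman/HJB fixed point becomes linear in the weights of a locally-weighted approximator, and alternating "solve the linear system" with "re-greedify the actions" converges in few iterations. The sketch below illustrates this idea on an assumed toy 1-D problem; the mesh, Gaussian kernels, dynamics, and cost are illustrative stand-ins, not the paper's actual formulation.

```python
# Minimal sketch (assumed toy problem, not the paper's code): locally-weighted
# value approximation turns the fixed point into linear equations in w.
import numpy as np

gamma = 0.95
nodes = np.linspace(-1.0, 1.0, 25)       # state-space mesh (assumed)
actions = np.linspace(-0.5, 0.5, 11)     # candidate actions per node (assumed)

def step(x, u):                          # assumed toy dynamics
    return np.clip(x + 0.1 * u, -1.0, 1.0)

def reward(x, u):                        # assumed quadratic cost
    return -(x**2 + 0.1 * u**2)

def features(x):                         # normalized Gaussian kernels
    k = np.exp(-0.5 * ((np.atleast_1d(x)[:, None] - nodes[None, :]) / 0.1)**2)
    return k / k.sum(axis=1, keepdims=True)

Phi = features(nodes)                    # basis evaluated at the mesh nodes
w = np.zeros(len(nodes))
for it in range(20):
    # greedy action at every node under the current value estimate
    q = np.stack([reward(nodes, u) + gamma * features(step(nodes, u)) @ w
                  for u in actions])
    u_star = actions[q.argmax(axis=0)]
    # with actions frozen, the fixed point is linear: (Phi - gamma*Phi') w = r
    Phi_next = features(step(nodes, u_star))
    r = reward(nodes, u_star)
    w_new = np.linalg.solve(Phi - gamma * Phi_next, r)
    if np.max(np.abs(w_new - w)) < 1e-9:
        break
    w = w_new
print("converged after", it + 1, "iterations; V(0) =", (features(0.0) @ w)[0])
```

The only nonlinearity in the fixed point is the maximization over actions; freezing it is what leaves a set of linear algebraic equations, which is why so few outer iterations are needed.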


Similar Articles

Continuous-action reinforcement learning with fast policy search and adaptive basis function selection

As an important approach to solving complex sequential decision problems, reinforcement learning (RL) has been widely studied in the community of artificial intelligence and machine learning. However, the generalization ability of RL is still an open problem and it is difficult for existing RL algorithms to solve Markov decision problems (MDPs) with both continuous state and action spaces. In t...
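As a generic illustration of the "adaptive basis function selection" idea named in the title (an assumed sketch, not the cited paper's algorithm), one can grow an RBF dictionary by adding a center wherever the current fit's residual is largest:

```python
# Illustrative adaptive basis selection (assumptions, not the paper's method):
# add an RBF center at the worst-fit sample until residuals are small.
import numpy as np

def target(x):                            # assumed stand-in for a value function
    return np.sin(3 * x)

def design(x, centers, width=0.3):        # Gaussian RBF design matrix
    return np.exp(-0.5 * ((x[:, None] - centers[None, :]) / width)**2)

rng = np.random.default_rng(0)
xs = rng.uniform(-2, 2, 200)
ys = target(xs)
centers = np.array([0.0])                 # start with a single basis function
for _ in range(30):
    A = design(xs, centers)
    w, *_ = np.linalg.lstsq(A, ys, rcond=None)
    resid = np.abs(A @ w - ys)
    if resid.max() < 0.05:                # stop once the fit is good enough
        break
    centers = np.append(centers, xs[resid.argmax()])  # add center at worst point
print(f"{len(centers)} basis functions, max residual {resid.max():.3f}")
```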

Reinforcement Learning in Continuous Action Spaces through Sequential Monte Carlo Methods

Learning in real-world domains often requires dealing with continuous state and action spaces. Although many solutions have been proposed to apply Reinforcement Learning algorithms to continuous state problems, the same techniques can hardly be extended to continuous action spaces, where, besides the computation of a good approximation of the value function, a fast method for the identification...
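The kind of sampling-based action identification this abstract alludes to can be sketched as follows (the value function, temperature, and jitter schedule are assumed placeholders, not the cited paper's method): maintain a particle set over the continuous action space, weight particles by estimated value, then resample and perturb so the set concentrates on high-value actions.

```python
# Minimal Monte Carlo action-selection sketch (assumed details).
import numpy as np

rng = np.random.default_rng(0)

def q_estimate(a):                     # assumed stand-in for a learned Q(s, a)
    return -(a - 0.7)**2               # best action at a = 0.7

particles = rng.uniform(-1, 1, 50)     # initial action particles
for _ in range(15):
    w = np.exp(q_estimate(particles) / 0.1)   # softmax-style weights
    w /= w.sum()
    idx = rng.choice(len(particles), size=len(particles), p=w)
    particles = particles[idx] + rng.normal(0, 0.05, len(particles))  # jitter
    particles = np.clip(particles, -1, 1)
print("selected action =", particles.mean())  # concentrates near 0.7
```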

Multiagent Reinforcement Learning in Stochastic Games with Continuous Action Spaces

We investigate the learning problem in stochastic games with continuous action spaces. We focus on repeated normal form games, and discuss issues in modelling mixed strategies and adapting finite-action learning algorithms to the continuous-action domain. We apply variable resolution techniques to two simple multi-agent reinforcement learning algorithms, PHC and MinimaxQ. Preliminary ...
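A variable-resolution discretization of a continuous action space, of the general kind mentioned above, might look like the sketch below; the payoff function and splitting rule are illustrative assumptions, not the paper's implementation.

```python
# Illustrative variable-resolution action discretization (assumed details):
# split the interval where linear interpolation of the payoff fails most,
# so resolution grows only where the payoff surface is interesting.
import numpy as np

def payoff(a):                         # assumed stand-in for a game payoff
    return np.exp(-50 * (a - 0.3)**2)  # sharp peak needing fine resolution

grid = [0.0, 0.5, 1.0]                 # coarse initial discretization
for _ in range(10):
    mids = [(lo + hi) / 2 for lo, hi in zip(grid, grid[1:])]
    errs = [abs(payoff(m) - (payoff(lo) + payoff(hi)) / 2)
            for m, lo, hi in zip(mids, grid, grid[1:])]
    worst = int(np.argmax(errs))       # interval with largest interpolation error
    grid.insert(worst + 1, mids[worst])
best = grid[int(np.argmax([payoff(a) for a in grid]))]
print(f"{len(grid)} actions; densest around a = {best:.2f}")
```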

Operation Scheduling of MGs Based on Deep Reinforcement Learning Algorithm

In this paper, the operation scheduling of Microgrids (MGs), including Distributed Energy Resources (DERs) and Energy Storage Systems (ESSs), is addressed using a Deep Reinforcement Learning (DRL) based approach. Due to the dynamic characteristics of the problem, it is first formulated as a Markov Decision Process (MDP). Next, the Deep Deterministic Policy Gradient (DDPG) algorithm is presented t...
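Schematically, such an MDP formulation might look like the environment skeleton below; the state, action, and reward definitions are common modelling assumptions for MG scheduling, not taken from the cited paper.

```python
# Schematic MG-scheduling MDP (assumed modelling choices): state = (load,
# renewable output, storage state of charge), continuous action = ESS power,
# reward = negative cost of energy imported from the grid.
import numpy as np

class MicrogridEnv:
    def __init__(self, load, renewables, capacity=10.0, price=0.2):
        self.load, self.renewables = load, renewables  # hourly profiles
        self.capacity, self.price = capacity, price
        self.t, self.soc = 0, capacity / 2             # state of charge

    def state(self):
        return np.array([self.load[self.t], self.renewables[self.t], self.soc])

    def step(self, p_ess):                 # continuous action: ESS power (kW)
        p_ess = float(np.clip(p_ess, -self.soc, self.capacity - self.soc))
        self.soc += p_ess                  # charge (+) / discharge (-)
        grid_import = max(0.0, self.load[self.t] - self.renewables[self.t] + p_ess)
        reward = -self.price * grid_import # negative operating cost
        self.t += 1
        done = self.t >= len(self.load)
        return (None if done else self.state()), reward, done
```

A continuous-action agent such as DDPG would then be trained against this interface, with its actor outputting p_ess at each hour.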

Deep Reinforcement Learning in Parameterized Action Space

Recent work has shown that deep neural networks are capable of approximating both value functions and policies in reinforcement learning domains featuring continuous state and action spaces. However, to the best of our knowledge no previous work has succeeded at using deep neural networks in structured (parameterized) continuous action spaces. To fill this gap, this paper focuses on learning wi...
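The "parameterized" structure in question pairs a discrete action type with type-specific continuous parameters, so a policy must output both a choice of type and that type's parameters. A minimal sketch follows; the action names and bounds are illustrative assumptions, loosely patterned on the RoboCup-style domains such work targets.

```python
# Sketch of a parameterized (hybrid discrete/continuous) action space.
from dataclasses import dataclass
import numpy as np

@dataclass
class ParameterizedAction:
    kind: str                 # discrete action type, e.g. "kick" or "dash"
    params: np.ndarray        # continuous parameters for that type

PARAM_BOUNDS = {              # per-type parameter ranges (assumed)
    "dash": [(0.0, 100.0), (-180.0, 180.0)],   # power, direction
    "turn": [(-180.0, 180.0)],                 # direction
    "kick": [(0.0, 100.0), (-180.0, 180.0)],   # power, direction
}

def sample_action(rng):       # uniform sample over the hybrid space
    kind = rng.choice(list(PARAM_BOUNDS))
    lo, hi = zip(*PARAM_BOUNDS[kind])
    return ParameterizedAction(kind, rng.uniform(lo, hi))

# e.g. sample_action(np.random.default_rng(0)) -> a random typed action
```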

Journal title:

Volume   Issue

Pages  -

Publication date: 2007